- As content moderation becomes a central aspect of all social media platforms and online communities, interest has grown in how to make moderation decisions contestable. On social media platforms where individual communities moderate their own activities, the responsibility to address user appeals falls on volunteers from within the community. While there is a growing body of work devoted to understanding and supporting the volunteer moderators' workload, little is known about their practice of handling user appeals. Through a collaborative and iterative design process with Reddit moderators, we found that moderators spend considerable effort investigating user ban appeals and want to directly engage with users and retain their agency over each decision. To fulfill these needs, we designed and built AppealMod, a system that induces friction in the appeals process by asking users to provide additional information before their appeals are reviewed by human moderators. In addition to giving moderators more information, we expected the friction in the appeal process to lead to a selection effect among users, with many insincere and toxic appeals being abandoned before getting any attention from human moderators. To evaluate our system, we conducted a randomized field experiment in a Reddit community of over 29 million users that lasted four months. As a result of the selection effect, moderators viewed only 30% of initial appeals and less than 10% of the toxically worded appeals, yet they granted roughly the same number of appeals as the control group. Overall, our system is effective at reducing moderator workload and minimizing their exposure to toxic content while honoring their preference for direct engagement and agency in appeals.
- While cross-partisan conversations are central to a vibrant deliberative democracy, these conversations are hard to have, especially amidst the unprecedented levels of partisan animosity we observe today. We report on a qualitative study of 17 US residents who engage with outpartisans on Reddit to understand what they look for in these interactions and the strategies they adopt. We find that users have multiple, sometimes contradictory expectations of these conversations, ranging from deliberative discussions to entertainment and banter. In aiming to foster 'good' cross-partisan discussions, users make strategic choices about which subreddits to participate in, whom to engage with, and how to talk to outpartisans, often establishing common ground, complimenting, and remaining dispassionate in their interactions. Further, contrary to offline settings, where knowing more about outpartisan interlocutors helps manage disagreements, on Reddit users actively try to learn as little as possible about them for fear that such information may bias their interactions. Through design probes, however, we find that users are actually open to knowing certain kinds of information about their interlocutors, such as non-political subreddits they both participate in, and to having that information made visible to their interlocutors. Making other information visible, such as the other subreddits they participate in or their past comments, though potentially humanizing, raises concerns around privacy and misuse of that information for personal attacks, especially among women and minority groups. Finally, we identify important challenges and opportunities in designing to improve online cross-partisan interactions in today's hyper-polarized environment.
- We present the first large-scale measurement study of cross-partisan discussions between liberals and conservatives on YouTube, based on a dataset of 274,241 political videos from 973 channels of US partisan media and 134M comments from 9.3M users over eight months in 2020. Contrary to a simple narrative of echo chambers, we find a surprising amount of cross-talk: most users with at least 10 comments posted at least once on both left-leaning and right-leaning YouTube channels. Cross-talk, however, was not symmetric. Based on user leaning predicted by a hierarchical attention model, we find that conservatives were much more likely to comment on left-leaning videos than liberals on right-leaning videos. Second, YouTube's comment sorting algorithm made cross-partisan comments modestly less visible; for example, comments from conservatives made up 26.3% of all comments on left-leaning videos but just over 20% of the comments in the top 20 positions. Lastly, using Perspective API's toxicity score as a measure of quality, we find that conservatives were not significantly more toxic than liberals when users commented directly on the content of videos. However, when users replied to comments from other users, cross-partisan replies were more toxic than co-partisan replies on both left-leaning and right-leaning videos, with cross-partisan replies being especially toxic on the replier's home turf.
- Research on online political communication has primarily focused on content in explicitly political spaces. In this work, we set out to determine how much political talk is missed by this approach. Focusing on Reddit, we estimate that nearly half of all political talk takes place in subreddits that host political content less than 25% of the time. In other words, cumulatively, political talk in non-political spaces is abundant. We further examine the nature of this political talk and show that political conversations are less toxic in non-political subreddits. Indeed, the average toxicity of political comments replying to an out-partisan in non-political subreddits is lower than even the toxicity of co-partisan replies in explicitly political subreddits.
- The past decade in the US has been one of the most politically polarizing in recent memory. Ordinary Democrats and Republicans fundamentally dislike and distrust each other, even when they agree on policy issues. This increase in hostility towards opposing party supporters, commonly called affective polarization, has important ramifications that threaten democracy. Political science research suggests that at least part of this polarization stems from Democrats' misperceptions about Republicans' political views and vice versa. Therefore, in this work, drawing on insights from political science and game studies research, we designed an online casual game that combines the relaxed, playful, nonpartisan norms of casual games with corrective information about party supporters' often-misperceived political views. Through an experiment, we found that playing the game significantly reduces negative feelings toward outparty supporters among Democrats, but not Republicans. It was also effective in improving willingness to talk politics with outparty supporters. Further, we identified psychological reactance as a potential mechanism that affects the effectiveness of depolarization interventions. Finally, our analyses suggest that the game versions with political content were rated just as fun to play as a version without any political content, suggesting that, contrary to popular belief, people do like to mix politics and play.
- We present an experimental assessment of the impact of feature attribution-style explanations on human performance in predicting the consensus toxicity of social media posts with advice from an unreliable machine learning model. By doing so we add to a small but growing body of literature inspecting the utility of interpretable machine learning in terms of human outcomes. We also evaluate interpretable machine learning for the first time in the important domain of online toxicity, where fully automated methods have faced criticism as being inadequate as a measure of toxic behavior. We find that, contrary to expectations, explanations have no significant impact on accuracy or agreement with model predictions, though they do change the distribution of subject error somewhat while reducing the cognitive burden of the task for subjects. Our results contribute to the recognition of an intriguing expectation gap in the field of interpretable machine learning between the general excitement the field has engendered and the ambiguous results of recent experimental work, including this study.
- Online communities about similar topics may maintain very different norms of interaction. Past research identifies many processes that contribute to maintaining stable norms, including self-selection, pre-entry learning, post-entry learning, and retention. We analyzed political subreddits on Reddit that had distinctive, stable levels of toxic comments in order to identify the relative contribution of these four processes. Surprisingly, we find that the largest source of norm stability is pre-entry learning. That is, newcomers' first comments in these distinctive subreddits differ from those same people's prior behavior in other subreddits. Through this adjustment, they nearly match the toxicity level of the subreddit they are joining. We also show that behavior adjustments are community-specific and not broadly transformative. That is, people continue to post toxic comments at their previous rates in other political subreddits. Thus, we conclude that in political subreddits, compatible newcomers are neither born nor made; they make local adjustments on their own.
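The AppealMod entry above describes a friction mechanism: a ban appeal is surfaced to human moderators only after the appellant answers a follow-up request for more information, and appeals abandoned at that step never reach the queue. The paper's actual implementation is not reproduced here; the following is a minimal sketch of that general idea, with every class and function name (`Appeal`, `request_additional_info`, `ready_for_moderators`) hypothetical and the three-day response window an assumption.

```python
# Minimal sketch of a friction-based appeal triage flow, loosely inspired by the
# AppealMod entry above. Names and the response window are hypothetical; the real
# system is a Reddit modmail bot and is not reproduced here.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

RESPONSE_WINDOW = timedelta(days=3)  # assumed deadline for the user's follow-up

@dataclass
class Appeal:
    user: str
    submitted_at: datetime
    initial_text: str
    follow_up: Optional[str] = None   # user's answers to the extra questions

def request_additional_info(appeal: Appeal) -> None:
    """Ask the appellant for more context before any moderator sees the appeal."""
    # In a real bot this would send a modmail/DM with a short questionnaire.
    print(f"Asking u/{appeal.user} to explain the removed content and their intent.")

def ready_for_moderators(appeal: Appeal, now: datetime) -> Optional[bool]:
    """True: enter the moderator queue. False: drop silently. None: still waiting."""
    if appeal.follow_up:                           # user engaged: surface to humans
        return True
    if now - appeal.submitted_at > RESPONSE_WINDOW:
        return False                               # abandoned: never reaches moderators
    return None                                    # still inside the response window
```

The selection effect reported in the abstract corresponds to the `False` branch: appeals abandoned during the response window never consume moderator attention.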
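Two entries above (the YouTube cross-talk study and the political-talk study) rely on Perspective API toxicity scores. As a rough illustration of how such a score is typically obtained, here is a sketch of a request to Perspective's `comments:analyze` endpoint; the API key placeholder, the English-only language setting, and the single `TOXICITY` attribute are assumptions, and the papers' exact request configuration is not known here.

```python
# Sketch: scoring a comment's toxicity with Google's Perspective API.
# Requires a Perspective API key; error handling and quota management are omitted.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the TOXICITY summary score (0-1) for a piece of text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example usage (needs a valid key):
# print(toxicity_score("You make a fair point, thanks for explaining."))
```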
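The headline estimate in the political-talk entry (nearly half of political talk occurring in subreddits that host political content less than 25% of the time) reduces to a simple aggregation once each comment carries a political/non-political label. The sketch below shows that aggregation; the 25% threshold comes from the abstract, while the column names, data frame layout, and the labeling step itself are assumptions.

```python
# Sketch: what share of political comments sits in mostly non-political subreddits?
# Assumes one row per comment with a boolean `is_political` label
# (the classification step that produces the label is not shown).
import pandas as pd

def share_in_nonpolitical_spaces(comments: pd.DataFrame,
                                 threshold: float = 0.25) -> float:
    """Fraction of political comments posted in subreddits whose overall share
    of political content is below `threshold`."""
    political_share = comments.groupby("subreddit")["is_political"].mean()
    nonpolitical_subs = political_share[political_share < threshold].index
    political_comments = comments[comments["is_political"]]
    return political_comments["subreddit"].isin(nonpolitical_subs).mean()

# Toy example: "gaming" hosts political content 20% of the time (< 25%),
# and 1 of the 3 political comments lives there, so the share is ~0.33.
df = pd.DataFrame({
    "subreddit": ["news", "news", "gaming", "gaming", "gaming", "gaming", "gaming"],
    "is_political": [True, True, True, False, False, False, False],
})
print(share_in_nonpolitical_spaces(df))
```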
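The interpretability entry evaluates feature attribution-style explanations for toxicity predictions. The study's own model and explanation method are not reproduced here; as a generic stand-in, the sketch below uses a bag-of-words logistic regression and treats each token's coefficient times its count as a simple per-word attribution, with the toy training data invented for illustration.

```python
# Sketch: per-word attributions from a linear toxicity classifier.
# Generic stand-in, not the model or explanation method used in the study.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = ["you are an idiot", "thanks for the thoughtful reply",
               "what a stupid take", "i appreciate your perspective"]
train_labels = [1, 0, 1, 0]  # 1 = toxic, 0 = non-toxic (toy labels)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
clf = LogisticRegression().fit(X, train_labels)

def explain(text: str) -> list[tuple[str, float]]:
    """Return (token, contribution) pairs, where contribution = coefficient * count."""
    vec = vectorizer.transform([text])
    vocab = vectorizer.get_feature_names_out()
    contributions = vec.toarray()[0] * clf.coef_[0]
    present = [(vocab[i], contributions[i]) for i in vec.nonzero()[1]]
    return sorted(present, key=lambda p: -abs(p[1]))

print(explain("what an idiot"))  # 'idiot' should carry the largest attribution
```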